
# Mixture of Experts acceleration

Qwen3 30B A1.5B 64K High Speed NEO Imatrix MAX Gguf
An optimized variant of the Qwen3-30B-A3B Mixture of Experts model that improves generation speed by reducing the number of active experts per token. It supports a 64k context length and is suited to a wide range of text generation tasks (a short routing sketch follows the listing).
Tags: Large Language Model · Supports Multiple Languages
Author: DavidAU
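
The listing attributes the speedup to using fewer active experts per token. The sketch below is a minimal, illustrative Mixture-of-Experts forward pass in plain NumPy; the layer sizes, expert count, and `top_k` values are assumptions for illustration and are not taken from Qwen3-30B-A3B or from this GGUF. It only shows why lowering `top_k` cuts per-token expert compute roughly in proportion.

```python
# Minimal MoE routing sketch (illustrative only, not the model's actual code).
# Dimensions and expert counts below are made up for the example.
import numpy as np

rng = np.random.default_rng(0)

d_model, d_ff, n_experts = 64, 256, 8
experts_w1 = rng.standard_normal((n_experts, d_model, d_ff)) * 0.02
experts_w2 = rng.standard_normal((n_experts, d_ff, d_model)) * 0.02
router_w = rng.standard_normal((d_model, n_experts)) * 0.02


def moe_forward(x: np.ndarray, top_k: int) -> np.ndarray:
    """Route one token through its top_k experts and mix their outputs."""
    logits = x @ router_w                      # router score per expert
    top = np.argsort(logits)[-top_k:]          # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                   # softmax over the chosen experts
    out = np.zeros_like(x)
    for w, e in zip(weights, top):             # only top_k expert MLPs actually run
        out += w * (np.maximum(x @ experts_w1[e], 0.0) @ experts_w2[e])
    return out


x = rng.standard_normal(d_model)
full = moe_forward(x, top_k=4)   # hypothetical stock routing width
fast = moe_forward(x, top_k=2)   # fewer active experts: about half the expert FLOPs
print(full.shape, fast.shape)
```

Because each selected expert runs its own MLP, per-token expert compute scales with `top_k`; halving the number of active experts roughly halves that cost, at some risk to output quality.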